    Derivatives and corporate risk management: participation and volume decisions in the insurance industry

    In this paper we formulate and test a number of hypotheses regarding insurer participation and volume decisions in derivatives markets. Several specific hypotheses are supported by our analysis. We find evidence consistent with the idea that insurers are motivated to use financial derivatives to hedge the costs of financial distress, interest rate, liquidity, and exchange rate risks. We also find some evidence that insurers use these instruments to hedge embedded options and manage their tax bills. We also find evidence of significant economies of scale in the use of derivatives. Interestingly, we often find that the predetermined variables we employ display opposite signs in the participation and volume regressions. We argue that this result is broadly consistent with the hypothesis that there is also a per-unit premium associated with hedging and that, conditional on having risk exposures large enough to warrant participation, firms with a larger appetite for risk will be less willing than average to pay this marginal cost.
    Subjects: Corporations - Finance; Derivative securities; Financial services industry; Business enterprises

    Derivatives and Corporate Risk Management: Participation and Volume Decisions in the Insurance Industry

    The use of derivatives in corporate risk management has grown rapidly in recent years. In this paper, the authors explore the factors that influence the use of financial derivatives in the U.S. insurance industry. Their objective is to investigate the motivations for corporate risk management. The authors use regulatory data on individual holdings and transactions in derivative markets. According to modern finance theory, shares of widely held corporations are held by diversified investors who operate in frictionless and complete markets and eliminate non-systematic risk through their portfolio choices. But this theory has been challenged by new hypotheses that take into account market imperfections, information asymmetries, and incentive conflicts as motivations for corporate managers to change the risk/return profile of their firm. The authors develop a set of hypotheses regarding the hedging behavior of insurers and test them on a sample of life and property-liability insurers. The sample consists of all U.S. life and property-liability insurers reporting to the NAIC. The authors investigate the decision to conduct derivatives transactions and the volume of transactions undertaken. There are two primary theories about the motivations for corporate risk management - maximization of shareholder value and maximization of managerial utility. The authors discuss these theories and the hypotheses they develop from them, and specify variables to test the hypotheses. They posit the following rationales for why corporations may choose to engage in risk management, and specify variables that help them study the use of these rationales by insurance firms: to avoid the costs of financial distress; to hedge part of their investment default/volatility/liquidity risks; to avoid shocks to equity that result in high leverage ratios; to minimize taxes and enhance firm value by reducing the volatility of earnings; and to maximize managerial utility. The authors argue that the use of derivatives for speculative purposes in the insurance industry is not common. They analyze the decision by insurers to enter the market and their volume of transactions, using probit analysis to study the participation decision and Tobit analysis, along with Cragg's generalization of the Tobit model, to study volume. The results support the authors' hypothesis that insurers hedge to maximize shareholder value; the analysis provides only weak support for the managerial utility hypothesis. Insurers are motivated to use financial derivatives to reduce the expected costs of financial distress. There is also evidence that insurers use derivatives to hedge asset volatility and exchange rate risks, and that there are significant economies of scale in running derivatives operations - only large firms and/or those with higher than average risk exposure find it worthwhile to pay the fixed cost of setting up a derivatives operation. Overall, insurers with higher than average asset risk exposures use derivative securities.
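    The two-part estimation strategy described above (a probit for the participation decision, then a Cragg-style model for volume among participants) can be sketched in a few lines. The example below is a minimal illustration on synthetic data, not the authors' code; the covariates (size, leverage, fx_exposure) are hypothetical stand-ins for the predetermined variables used in the paper.

```python
# Illustrative two-part (Cragg-style) specification: probit for the decision to
# use derivatives, then a log-volume regression conditional on participation.
# All data are synthetic; variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
size = rng.normal(8, 1.5, n)          # log assets
leverage = rng.uniform(0.5, 0.95, n)  # liabilities / assets
fx_exposure = rng.uniform(0, 0.2, n)  # share of foreign-denominated assets
X = sm.add_constant(np.column_stack([size, leverage, fx_exposure]))

# Synthetic participation and volume, just to make the example runnable.
latent = -12 + 1.2 * size + 2.0 * leverage + 5.0 * fx_exposure + rng.normal(0, 1, n)
participate = (latent > 0).astype(int)
log_volume = 2 + 0.8 * size + 3.0 * fx_exposure + rng.normal(0, 1, n)

# Part 1: participation decision (probit).
probit_res = sm.Probit(participate, X).fit(disp=False)

# Part 2: volume decision, estimated only on participating firms.
mask = participate == 1
ols_res = sm.OLS(log_volume[mask], X[mask]).fit()

print(probit_res.summary())
print(ols_res.summary())
```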

    The Basis Risk of Catastrophic-Loss Index Securities

    This paper analyzes the basis risk of catastrophic-loss (CAT) index derivatives, which securitize losses from catastrophic events such as hurricanes and earthquakes. We analyze the hedging effectiveness of these instruments for 255 insurers writing 93 percent of the insured residential property values in Florida, the state most severely affected by exposure to hurricanes. County-level losses are simulated for each insurer using a sophisticated model developed by Applied Insurance Research. We analyze basis risk by measuring the effectiveness of hedge portfolios, consisting of a short position in each insurer's own catastrophic losses and a long position in CAT-index call spreads, in reducing insurer loss volatility, value-at-risk, and expected losses above specified thresholds. Two types of loss indices are used -- a statewide index and intra-state indices based on insurance losses in four quadrants of the state. The principal finding is that firms in the three largest Florida market-share quartiles can hedge almost as effectively using the intra-state index contracts as they can using contracts that settle on their own losses. Hedging with the statewide contracts is effective only for insurers with the largest market shares and for smaller insurers that are highly diversified throughout the state. The results also support the agency-theoretic hypotheses that mutual insurers are more diversified than stocks and that unaffiliated single firms are more diversified than insurers that are members of groups.
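    A stylized version of the hedge-effectiveness calculation described above can be written directly: hedge each insurer's losses with a long position in a call spread on a loss index and compare volatility and value-at-risk before and after hedging. The sketch below uses entirely synthetic losses, and the attachment point, exhaustion point, and hedge ratio are hypothetical.

```python
# Illustrative hedge-effectiveness calculation for an index call spread,
# in the spirit of the basis-risk analysis described above. All figures are
# synthetic; layer bounds and the hedge ratio are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000

# Simulated industry index losses and one insurer's correlated losses ($ millions).
index_losses = rng.lognormal(mean=6.0, sigma=1.0, size=n_sims)
insurer_losses = 0.02 * index_losses * rng.lognormal(mean=0.0, sigma=0.3, size=n_sims)

def call_spread_payoff(index, attach, exhaust):
    """Payoff of a long call spread written on the loss index."""
    return np.clip(index - attach, 0.0, exhaust - attach)

attach, exhaust, hedge_ratio = 800.0, 2000.0, 0.02
hedged = insurer_losses - hedge_ratio * call_spread_payoff(index_losses, attach, exhaust)

def var(x, q=0.99):
    """Value-at-risk at confidence level q."""
    return np.quantile(x, q)

print("std reduction: %.1f%%" % (100 * (1 - hedged.std() / insurer_losses.std())))
print("99%% VaR reduction: %.1f%%" % (100 * (1 - var(hedged) / var(insurer_losses))))
```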

    The Incentive Effects of No Fault Automobile Insurance

    This paper presents a theoretical and empirical analysis of the effects of no fault automobile insurance on accident rates. As a mechanism for compensating the victims of automobile accidents, no fault has several important advantages over the tort system. However, by restricting access to tort, no fault may weaken incentives for careful driving, leading to higher accident rates. We conduct an empirical analysis of automobile accident fatality rates in all U.S. states over the period 1982-1994, controlling for the potential endogeneity of no fault laws. The results support the hypothesis that no fault is significantly associated with higher fatal accident rates than tort.
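    The phrase "controlling for the potential endogeneity of no fault laws" points to an instrumental-variables style estimator. The sketch below is a generic two-stage least squares example on synthetic data, not the authors' specification; the instruments and covariates are purely hypothetical.

```python
# Minimal two-stage least squares (2SLS) sketch for a binary policy variable
# treated as endogenous, as in the fatality-rate analysis described above.
# Data, instruments, and covariates are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 600  # e.g., state-year observations

z = rng.normal(size=(n, 2))              # hypothetical instruments
u = rng.normal(size=n)                   # unobserved confounder
no_fault = (z @ [1.0, -0.5] + u + rng.normal(size=n) > 0).astype(float)
fatality_rate = 1.5 + 0.3 * no_fault + 0.8 * u + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), no_fault])   # second-stage regressors
Z = np.column_stack([np.ones(n), z])          # instruments plus constant

# Stage 1: project the endogenous regressor onto the instruments.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]

# Stage 2: regress the outcome on the fitted regressors.
beta_2sls = np.linalg.lstsq(X_hat, fatality_rate, rcond=None)[0]
print("2SLS estimate of the no-fault effect:", beta_2sls[1])
```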

    Pricing Excess-of-loss Reinsurance Contracts Against Catastrophic Loss

    This paper develops a pricing methodology and pricing estimates for the proposed Federal excess-of-loss (XOL) catastrophe reinsurance contracts. The contracts, proposed by the Clinton Administration, would provide per-occurrence excess-of-loss reinsurance coverage to private insurers and reinsurers, where both the coverage layer and the fixed payout of the contract are based on insurance industry losses, not company losses. In financial terms, the Federal government would be selling earthquake and hurricane catastrophe call options to the insurance industry to cover catastrophic losses in a loss layer above that currently available in the private reinsurance market. The contracts would be sold annually at auction, with a reservation price designed to avoid a government subsidy and ensure that the program would be self-supporting in expected value. If a loss were to occur that resulted in payouts in excess of the premiums collected under the policies, the Federal government would use its ability to borrow at the risk-free rate to fund the losses. During periods when the accumulated premiums paid into the program exceed the losses paid, the buyers of the contracts implicitly would be lending money to the Treasury, reducing the costs of government debt. The expected interest on these "loans" offsets the expected financing (borrowing) costs of the program as long as the contracts are priced appropriately. By accessing the Federal government's superior ability to diversify risk inter-temporally, the contracts could be sold at a rate lower than would be required in conventional reinsurance markets, which would potentially require a high cost of capital due to the possibility that a major catastrophe could bankrupt some reinsurers. By pricing the contracts at least to break even, the program would provide for eventual private-market "crowding out" through catastrophe derivatives and other innovative catastrophic risk financing mechanisms. We develop prices for the contracts using two samples of catastrophe losses: (1) historical catastrophic loss experience over the period 1949-1994 as reported by Property Claim Services; and (2) simulated catastrophe losses based on an engineering simulation analysis conducted by Risk Management Solutions. We used maximum likelihood estimation techniques to fit frequency and severity probability distributions to the catastrophic loss data, and then used the distributions to estimate expected losses under the contracts. The reservation price would be determined by adding an administrative expense charge and a risk premium to the expected losses for the specified layer of coverage. We estimate the expected loss component of the government's reservation price for proposed XOL contracts covering the entire U.S., California, Florida, and the Southeast. We used a loss layer of $25-50 billion for illustrative purposes.
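    The frequency/severity estimation step described above can be illustrated compactly: fit a frequency distribution to annual event counts and a severity distribution to per-event losses, then simulate to estimate the expected annual loss to the $25-50 billion layer. The sketch below uses synthetic losses, and the Poisson/lognormal choices are assumptions for illustration rather than the distributions selected in the paper.

```python
# Illustrative frequency/severity fit and expected-loss estimate for a
# per-occurrence $25-50 billion industry-loss layer. Losses are synthetic;
# the Poisson and lognormal distribution choices are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic annual catastrophe counts and per-event industry losses ($ billions).
annual_counts = rng.poisson(lam=2.0, size=46)   # cf. the 1949-1994 sample length
event_losses = rng.lognormal(mean=1.0, sigma=1.4, size=annual_counts.sum())

# MLE: for a Poisson frequency the MLE of lambda is the sample mean;
# the lognormal severity is fit with scipy's built-in MLE routine.
lam_hat = annual_counts.mean()
shape, loc, scale = stats.lognorm.fit(event_losses, floc=0)

# Monte Carlo estimate of the expected annual loss to the 25-50 layer.
attach, exhaust = 25.0, 50.0
n_years = 50_000
sim_counts = rng.poisson(lam_hat, size=n_years)
layer_losses = np.zeros(n_years)
for i, k in enumerate(sim_counts):
    if k:
        sev = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=k)
        # Per-occurrence layer: each event recovers min(max(L - attach, 0), exhaust - attach).
        layer_losses[i] = np.clip(sev - attach, 0.0, exhaust - attach).sum()

print("expected annual layer loss (bn): %.3f" % layer_losses.mean())
```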

    Regulatory solvency prediction in property-liability insurance: risk-based capital, audit ratios, and cash flow simulation

    This paper analyzes the accuracy of the principal models used by U.S. insurance regulators to predict insolvencies in the property-liability insurance industry and compares these models with a relatively new solvency testing approach--cash flow simulation. Specifically, we compare the risk-based capital (RBC) system introduced by the National Association of Insurance Commissioners (NAIC) in 1994, the FAST (Financial Analysis and Surveillance Tracking) audit ratio system used by the NAIC, and a cash flow simulation model developed by the authors. Both the RBC and FAST systems are static, ratio-based approaches to solvency testing, whereas the cash flow simulation model implements dynamic financial analysis. Logistic regression analysis is used to test the models for a large sample of solvent and insolvent property-liability insurers, using data from the years 1990-1992 to predict insolvencies over three-year prediction horizons. We find that the FAST system dominates RBC as a static method for predicting insurer insolvencies. Further, we find the cash flow simulation variables add significant explanatory power to the regressions and lead to more accurate solvency prediction than the ratio-based models taken alone.
    Subjects: Insurance industry
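    The incremental-explanatory-power comparison described above has a natural minimal analogue: fit a logistic regression on a ratio-based variable set, refit with a cash-flow simulation variable added, and test the improvement. The example below uses synthetic data and stand-in variable names; it is not the authors' model.

```python
# Illustrative comparison of insolvency-prediction specifications with logistic
# regression: a ratio-only model versus the same model augmented with a
# cash-flow simulation variable. Data and variable names are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000

ratios = rng.normal(size=(n, 3))          # stand-ins for FAST-style audit ratios
cash_flow_score = rng.normal(size=n)      # stand-in for cash flow simulation output
latent = -3 + ratios @ [0.6, -0.4, 0.5] + 0.8 * cash_flow_score + rng.logistic(size=n)
insolvent = (latent > 0).astype(int)

X_ratios = sm.add_constant(ratios)
X_full = sm.add_constant(np.column_stack([ratios, cash_flow_score]))

m_ratios = sm.Logit(insolvent, X_ratios).fit(disp=False)
m_full = sm.Logit(insolvent, X_full).fit(disp=False)

# Likelihood-ratio test of whether the cash-flow variable adds explanatory power.
lr_stat = 2 * (m_full.llf - m_ratios.llf)
print("LR statistic (1 df):", lr_stat)
```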

    Generating-function method for fusion rules

    This is the second of two articles devoted to an exposition of the generating-function method for computing fusion rules in affine Lie algebras. The present paper focuses on fusion rules, using the machinery developed for tensor products in the companion article. Although the Kac-Walton algorithm provides a method for constructing a fusion generating function from the corresponding tensor-product generating function, we describe a more powerful approach which starts by first defining the set of fusion elementary couplings from a natural extension of the set of tensor-product elementary couplings. A set of inequalities involving the level is derived from this set using Farkas' lemma. These inequalities, taken in conjunction with the inequalities defining the tensor products, define what we call the fusion basis. Given this basis, the machinery of our previous paper may be applied to construct the fusion generating function. New generating functions for sp(4) and su(4), together with a closed-form expression for their threshold levels, are presented.
    Comment: Harvmac (b mode: 47 p) and Pictex; to appear in J. Math. Phys.
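    For readers unfamiliar with the objects involved, the simplest affine case can be written in closed form. The block below records the standard su(2) level-k fusion rules, threshold level, and fusion generating function as an orientation aid; it is a textbook example, not a result of the paper (which treats sp(4) and su(4)).

```latex
% Standard su(2)_k example (illustration only, not from this paper).
% \lambda_i are Dynkin labels (twice the spin); k is the level.
N^{(k)\,\lambda_3}_{\lambda_1\lambda_2} =
\begin{cases}
1, & |\lambda_1-\lambda_2| \le \lambda_3 \le \min(\lambda_1+\lambda_2,\; 2k-\lambda_1-\lambda_2)
    \ \text{and}\ \lambda_1+\lambda_2+\lambda_3 \in 2\mathbb{Z},\\
0, & \text{otherwise,}
\end{cases}
\qquad
k_0(\lambda_1,\lambda_2,\lambda_3) = \tfrac{1}{2}\,(\lambda_1+\lambda_2+\lambda_3).

% The corresponding fusion generating function, with d tracking the level and
% L_i tracking the Dynkin labels, is
G^{(\mathrm{fusion})}(d; L_1, L_2, L_3) =
\frac{1}{(1-d)\,(1-d\,L_1 L_2)\,(1-d\,L_2 L_3)\,(1-d\,L_1 L_3)} .
```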

    Generating-function method for tensor products

    This is the first of two articles devoted to an exposition of the generating-function method for computing fusion rules in affine Lie algebras. The present paper is entirely devoted to the study of the tensor-product (infinite-level) limit of fusion rules. We start by reviewing Sharp's character method. An alternative approach to the construction of tensor-product generating functions is then presented which overcomes most of the technical difficulties associated with the character method. It is based on the reformulation of the problem of calculating tensor products in terms of the solution of a set of linear and homogeneous Diophantine equations whose elementary solutions represent ``elementary couplings''. Gröbner bases provide a tool for generating the complete set of relations between elementary couplings and, most importantly, an algorithm for specifying a complete, compatible set of ``forbidden couplings''.
    Comment: Harvmac (b mode: 39 p) and Pictex; this is a substantially reduced version of hep-th/9811113 (with new title); to appear in J. Math. Phys.
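    As a companion illustration in the simplest setting, the su(2) tensor-product generating function and its elementary couplings can be stated explicitly. This is the standard textbook example, included only for orientation; the labels and variable names below follow the usual conventions rather than notation taken from the paper.

```latex
% Standard su(2) tensor-product example (illustration only, not from this paper).
% A coupling (\lambda_1,\lambda_2,\lambda_3) corresponds to a nonnegative integer
% solution (a,b,c) of the linear Diophantine system
\lambda_1 = a + c, \qquad \lambda_2 = a + b, \qquad \lambda_3 = b + c,
% whose elementary solutions give the elementary couplings
% E_1 = L_1 L_2, \quad E_2 = L_2 L_3, \quad E_3 = L_1 L_3.
% Since there are no relations among them, the generating function is
G(L_1, L_2, L_3) = \frac{1}{(1-L_1 L_2)\,(1-L_2 L_3)\,(1-L_1 L_3)} ,
% where the coefficient of L_1^{\lambda_1} L_2^{\lambda_2} L_3^{\lambda_3} is the
% tensor-product multiplicity (here always 0 or 1).
```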

    A precise CNOT gate in the presence of large fabrication induced variations of the exchange interaction strength

    We demonstrate how, using two-qubit composite rotations, a high-fidelity controlled-NOT (CNOT) gate can be constructed even when the strength of the interaction between qubits is not accurately known. We focus on the exchange interaction oscillation in silicon-based solid-state architectures with a Heisenberg Hamiltonian. The method applies readily to a general two-qubit Hamiltonian. We show how the robust CNOT gate can achieve a very high fidelity when a single application of the composite rotations is combined with a modest level of Hamiltonian characterisation. Operating the robust CNOT gate in a suitably characterised system means concatenation of the composite pulse is unnecessary, hence reducing operation time and ensuring the gate operates below the threshold required for fault-tolerant quantum computation.
    Comment: 9 pages, 8 figures
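    To make the calibration problem concrete, the short numpy sketch below shows how the fidelity of a gate generated by a Heisenberg exchange Hamiltonian degrades when the exchange strength J is systematically mis-set. It illustrates the error that composite rotations are designed to suppress, not the composite-pulse construction from the paper; the Hamiltonian convention and target rotation angle are assumptions.

```python
# Minimal sketch of the sensitivity problem addressed by composite rotations:
# a two-qubit gate generated by a Heisenberg exchange Hamiltonian acquires a
# coherent error when the exchange strength J deviates from its assumed value.
# The convention H = (J/4) * sigma.sigma and the target angle are assumptions.
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the two-qubit exchange operator sum_a sigma_a (x) sigma_a / 4.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
exchange = sum(np.kron(s, s) for s in (sx, sy, sz)) / 4.0

def exchange_unitary(theta):
    """Evolution under the exchange Hamiltonian for a rotation angle theta = J*t."""
    return expm(-1j * theta * exchange)

def avg_gate_fidelity(u_ideal, u_actual, d=4):
    """Standard average gate fidelity between two unitaries of dimension d."""
    m = u_ideal.conj().T @ u_actual
    return (d + abs(np.trace(m)) ** 2) / (d * (d + 1))

theta_target = np.pi / 2   # nominal rotation angle (an assumption for illustration)
u_ideal = exchange_unitary(theta_target)

for eps in (0.0, 0.02, 0.05, 0.10):   # fractional error in J
    u_err = exchange_unitary(theta_target * (1 + eps))
    print(f"J error {eps:5.0%}: fidelity = {avg_gate_fidelity(u_ideal, u_err):.6f}")
```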

    Relationships Between Long-Range Lightning Networks and TRMM/LIS Observations

    Recent advances in long-range lightning detection technologies have improved our understanding of thunderstorm evolution in the data-sparse oceanic regions. Although the expansion and improvement of long-range lightning datasets have increased their applicability, these applications (e.g., data assimilation, atmospheric chemistry, and aviation weather hazards) require knowledge of the network detection capabilities. The present study intercompares long-range lightning data with observations from the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite. The study examines network detection efficiency and location accuracy relative to LIS observations, describes spatial variability in these performance metrics, and documents the characteristics of LIS flashes that are detected by the long-range networks. Improved knowledge of relationships between these datasets will allow researchers, algorithm developers, and operational users to better prepare for the spatial and temporal coverage of the upcoming GOES-R Geostationary Lightning Mapper (GLM).
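    The two performance metrics in question (detection efficiency and location accuracy) reduce to a matching exercise between flash lists. The sketch below is a schematic version on synthetic data; the matching thresholds, noise levels, and degree-to-kilometre conversion are assumptions, not values from the study.

```python
# Illustrative computation of flash detection efficiency and median location
# error of a long-range network relative to LIS. Matching thresholds and the
# synthetic data are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(5)

# Synthetic LIS flashes: columns are time (s), latitude, longitude.
n_lis = 500
lis = np.column_stack([rng.uniform(0, 3600, n_lis),
                       rng.uniform(-10, 10, n_lis),
                       rng.uniform(120, 140, n_lis)])

# Synthetic network detections: a subset of LIS flashes with timing/location noise.
detected_mask = rng.random(n_lis) < 0.15
net = lis[detected_mask] + rng.normal(0, [0.2, 0.15, 0.15], size=(detected_mask.sum(), 3))

# Match each network detection to the nearest LIS flash within 0.5 s and 0.5 deg.
matched_errors_km = []
matched_lis = np.zeros(n_lis, dtype=bool)
for t, lat, lon in net:
    dt = np.abs(lis[:, 0] - t)
    dd = np.hypot(lis[:, 1] - lat, lis[:, 2] - lon)
    ok = (dt < 0.5) & (dd < 0.5)
    if ok.any():
        i = np.argmin(np.where(ok, dd, np.inf))
        matched_lis[i] = True
        matched_errors_km.append(dd[i] * 111.0)   # rough degrees-to-km conversion

detection_efficiency = matched_lis.sum() / n_lis
print(f"flash detection efficiency: {detection_efficiency:.1%}")
print(f"median location error: {np.median(matched_errors_km):.1f} km")
```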